In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik-Chervonenkis (VC-) dimension of the set of half-spaces in $\mathbb{R}^n$ generated by $k$-sparse vectors is bounded below by $k \lg(n/k)$ and above by $2k \lg (n/k)$, plus some round-off terms. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg (n/k))$ measurements, given only the signs of the measurement vector. This result holds for \textit{all} probability measures on $\mathbb{R}^n$. It is further shown that random sign-flipping errors result only in an increase in the constant in the $O(k \lg (n/k))$ estimate. Because constructing a consistent algorithm is not straightforward, we present a heuristic based on the $\ell_1$-norm support vector machine, and illustrate that its computational performance is superior to a currently popular method.
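The $\ell_1$-norm SVM heuristic mentioned above can be illustrated with a minimal sketch. The paper's own formulation is not reproduced here; the sketch below assumes a simplified hard-margin variant, in which one seeks the vector of smallest $\ell_1$-norm whose half-space is consistent with all sign measurements, i.e. minimize $\|w\|_1$ subject to $y_i \, a_i^\top w \ge 1$. This is a linear program. The function name `recover_one_bit` and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def recover_one_bit(A, y):
    """Sketch of l1-SVM-style one-bit recovery (hard-margin simplification).

    Given measurement matrix A (m x n) and signs y = sign(A @ x_true),
    solve:  min ||w||_1  s.t.  y_i * (a_i . w) >= 1,
    via the standard split w = variables, u >= |w| elementwise.
    Returns the minimizer w; only the direction of x_true is identifiable
    from sign measurements.
    """
    m, n = A.shape
    # Decision variables: [w (free, n entries), u (>= 0, n entries)].
    c = np.concatenate([np.zeros(n), np.ones(n)])  # minimize sum(u) = ||w||_1
    I = np.eye(n)
    A_ub = np.block([
        [I, -I],                                  #  w - u <= 0
        [-I, -I],                                 # -w - u <= 0
        [-(y[:, None] * A), np.zeros((m, n))],    # -y_i a_i.w <= -1
    ])
    b_ub = np.concatenate([np.zeros(2 * n), -np.ones(m)])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```

Because only signs are observed, success is naturally measured by angular (cosine) alignment between the recovered $w$ and the true sparse vector, not by Euclidean error.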